What is MASEM?

  • Meta-analysis may also be used to synthesize findings in structural equation models. The class of techniques is called meta-analytic structural equation modeling (MASEM; Cheung, 2021).1
  • It uses meta-analytic techniques to pool correlation matrices and then fits structural equation models on the pooled correlation matrix, thereby combining the best of both worlds.

Terms used in the literature

  • Several names have been used interchangeably in the literature,
    • meta-analytic path analysis;
    • meta-analysis of factor analysis;
    • meta-analytical structural equations analysis;
    • path analysis of meta-analytically derived correlation matrices;
    • SEM of a meta-analytic correlation matrix;
    • path analysis based on meta-analytic findings; and
    • model-based meta-analysis.
  • We use the generic term meta-analytic structural equation modeling (MASEM) to describe this class of techniques.

Benefits of MASEM to SEM researchers

  • MASEM allows SEM researchers to integrate findings over several studies.
  • MASEM may test the consistency of the proposed models across studies.
  • MASEM may study how the model varies according to the study characteristics via moderator analysis.

Benefits of MASEM to meta-analysis researchers

  • MASEM tests theoretically relevant models, e.g., a mediation model, rather than individual correlations.
  • MASEM allows researchers to compare several models.

Problems in primary research using SEM

  • Low statistical power in psychological studies;
  • Contradictory findings with significant tests;
  • Confirmation bias: researchers are often reluctant to consider alternative models in SEM;
  • Different researchers have proposed different models.
  • Conducting more empirical studies may not necessarily decrease the uncertainty of a particular topic if the findings are inconsistent (National Research Council, 1992).2
  • MASEM may be used to address many of these issues.

Example 1: Brown and Stayman (1992)

  • Topic: Antecedents and consequences of attitudes toward the advertisement (Ad)3
    • No. of variables: 5
    • No. of studies: 47
    • Pooled sample size across studies: over 4,600

Brown and Stayman (1992)

Example 2: Premack and Hunter (1988)

  • Topic: Individual unionization decisions4
    • No. of variables: 6
    • No. of studies: 14
    • Pooled sample size across studies: over 2,800

Premack and Hunter (1988)

Example 3: Norton et al. (2013)

  • Ten different factor structures of the Hospital Anxiety and Depression Scale have each been supported by some empirical data.5
  • The authors identified 28 independent samples from 21 studies (N=21,820).
  • They found that the bifactor structure consisting of a general distress factor and anxiety and depression group factors fitted the data best.

Norton et al. (2013)

Example 4: Murayama and Elliot (2012)

  • Based on their meta-analysis, these authors found that the association between competition and performance was close to zero.6
  • They proposed two mediators (performance-approach goals and performance-avoidance goals) to explain this apparent zero correlation.
  • The total effect of competition on performance is close to zero because the positive and negative indirect effects largely cancel out (474 studies with 139,464 participants).

Murayama and Elliot (2012, Figure 1)

Example 5: Hagger et al. (2022)

  • There are several meta-analyses on the Theory of Planned Behavior. However, almost no meta-analysis has addressed the moderating effect of perceived behavioral control (PBC).
  • Hagger et al. (2022) meta-analyzed 39 datasets on health behaviors from the Hagger and Hamilton labs.7
  • Only the interaction between PBC and intention to predict behavior was statistically significant. In contrast, the interactions between PBC and subjective norm and between PBC and attitude to predict intention were not.

Hagger et al. (2022, Figure 1)

Example 6: Isvoranu, Epskamp, and Cheung (2022)

  • Network analysis is a popular alternative framework to the latent variable model for explaining associations among observed variables.8 It has been widely used in psychopathology, including research on post-traumatic stress disorder (PTSD), psychosis, depression, and personality.
  • Epskamp, Isvoranu, and Cheung (2022)9 combined meta-analysis and network analysis to integrate findings in the context of network analysis.
  • Isvoranu, Epskamp, and Cheung (2022)10 meta-analyzed 52 samples from 33 studies of network structures on PTSD.
  • Results are more stable as they are based on more data.

Isvoranu, Epskamp, and Cheung (2022, Figure 3)

Key procedures and decisions in conducting MASEM

  1. Identify key research questions, constructs, measurements, and structural equation models: Before conducting the MASEM, researchers have to formulate research questions and identify all the relevant key constructs, measurement models, and structural equation models.
  2. Formulate clear inclusion/exclusion criteria. This step is essential in all meta-analyses, including MASEM, because it provides theoretical justifications on whether the selected studies can be meaningfully combined.
  3. Identify and extract the relevant data, including correlation matrices, sample sizes, and study characteristics (moderators).
  4. There are several approaches to conducting MASEM. Choose an appropriate approach to combine the correlation matrices and fit the structural equation models.

Approaches to MASEM

  • Most methods use a two-stage approach to conducting MASEM:
    • Stage 1 analysis: Combine the correlation matrices into a pooled correlation matrix;
    • Stage 2 analysis: Fit structural equation models on the pooled correlation matrix.
  • Two-stage approaches:
    • Univariate approach (Viswesvaran & Ones 1995);11
    • Generalized least squares (GLS; Becker, 1992);12
    • Two-stage structural equation modeling (TSSEM; Cheung, 2014; Cheung & Chan, 2005).13, 14
  • One-stage approaches:
    • One-stage MASEM (OSMASEM; Jak & Cheung, 2020).15
    • Bayesian MASEM (Ke, Zhang, & Tong, 2019)16

Jak and Cheung (2020, Table 1)

Univariate approach (Viswesvaran & Ones, 1995)

  • The univariate approach is the most popular MASEM approach, especially in management and organizational studies.
  • Its main attraction is its ease of use.

Viswesvaran and Ones (1995) citations over time

Viswesvaran and Ones (1995) citations across journals

Basic ideas

  • Stage 1 analysis
    • It synthesizes the correlation coefficients in a correlation matrix one at a time, as if the correlation coefficients were independent.
    • Incomplete correlation coefficients are handled by pairwise deletion.
    • Problem: pairwise deletion assumes that the missing correlations are missing completely at random (MCAR).
  • Stage 2 analysis
    • The pooled correlation matrix is used as if it were a covariance matrix.
    • Researchers usually use the harmonic mean of the sample sizes as the sample size.
  • Problems:
    • The correlation matrix is analyzed as if it were a covariance matrix. The test statistics and the standard errors (SEs) may be incorrect (Cudeck, 1989).17
    • One sample size is used to represent the precision of the average correlation matrix.
  • For example, \(R_1 = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& 0.6& 1.0& & \\ x3& 0.5& 0.7& 1.0 &\\ \end{array} } \right]\),
  • \(R_2 = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& 0.6& 1.0& & \\ x3& NA& NA& NA&\\ \end{array} } \right]\),
  • \(R_3 = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& NA& 1.0& & \\ x3& 0.7& NA& 1.0 &\\ \end{array} } \right]\), and
  • \(R_4 = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& NA& 1.0& & \\ x3& NA& 0.5& 1.0 &\\ \end{array} } \right]\).
  • The average correlation matrix is \(\bar{R} = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& 0.6& 1.0& & \\ x3& 0.6& 0.6& 1.0 &\\ \end{array} } \right]\).
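The pairwise pooling in this example can be sketched in a few lines. The sketch below is in Python for illustration (the slides use R); it takes an unweighted average of whatever correlations are observed, ignoring the sample-size weighting that a real univariate meta-analysis would apply:

```python
# Pairwise averaging of incomplete correlation matrices (unweighted sketch).
# Missing correlations are represented as None; each correlation is averaged
# over the studies that report it (pairwise deletion).
NA = None

R1 = {"r21": 0.6, "r31": 0.5, "r32": 0.7}
R2 = {"r21": 0.6, "r31": NA,  "r32": NA}
R3 = {"r21": NA,  "r31": 0.7, "r32": NA}
R4 = {"r21": NA,  "r31": NA,  "r32": 0.5}

studies = [R1, R2, R3, R4]
pooled = {}
for key in ("r21", "r31", "r32"):
    observed = [s[key] for s in studies if s[key] is not None]
    pooled[key] = sum(observed) / len(observed)

print(pooled)  # each pooled correlation is 0.6, matching the slide
```

Note how the MCAR assumption enters: each average simply drops the studies that did not report that correlation.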

One (sample) size does not fit all

  • There are some problems fitting SEMs with a single sample size in the univariate approach.
  • Let us consider the average correlation matrix as an example.
  • The sample sizes for the correlations \(r_{21}\), \(r_{31}\), and \(r_{32}\) are 200, 500, and 1,000, respectively.
  • The arithmetic mean, the harmonic mean, and the median are 567, 375, and 500, respectively.
  • If the harmonic mean (375) is used as the sample size in SEM, some estimated SEs are over-estimated while others are under-estimated.
  • The sample size affects the chi-square test, some goodness-of-fit indices, and SEs.
  • \(\bar{R} = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& 0.6& 1.0& & \\ x3& 0.6& 0.6& 1.0 &\\ \end{array} } \right]\).
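The three averages are easy to reproduce. A quick check in Python using the standard library (sample sizes taken from the example above):

```python
from statistics import harmonic_mean, median

# Sample sizes behind the three pooled correlations in the example.
n = [200, 500, 1000]

arithmetic = sum(n) / len(n)   # about 567
harmonic = harmonic_mean(n)    # 375
med = median(n)                # 500
print(round(arithmetic), harmonic, med)
```

Whichever single number is picked, it misrepresents the precision of at least some of the pooled correlations.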

Problems of treating a correlation matrix as a covariance matrix

  • Analysis of the correlation matrix:
    • The diagonals are always 1; they do not carry any information.
    • A correct analysis must reproduce the diagonals of the model-implied matrix \(\hat{\Sigma}\) as exactly 1.
  • The chi-square test or SEs can be incorrect if we treat a correlation matrix as if it were a covariance matrix in SEM.
  • Cudeck (1989) warned of the problems of treating correlation matrices as covariance matrices in SEM more than 30 years ago. However, many researchers still ignore the warnings in MASEM today.18

A simulation study comparing some of these methods

  • Jak and Cheung (2020)19 (see Jak & Cheung, 2022)20 conducted a simulation study comparing several approaches.
  • General findings on the univariate approach:
    • The standard errors are seriously underestimated.
    • The parameter estimates are correct.
    • The chi-square test statistics are incorrect.

Accuracy of the test statistics in rejecting the proposed models

Jak and Cheung (2022, test statistics)

Accuracy of the parameter estimates

Jak and Cheung (2022, parameter estimates)

Accuracy of the standard errors

Jak and Cheung (2022, standard errors)

GLS approach

  • Stage 1:
    • The GLS approach uses a multivariate meta-analysis to synthesize correlation matrices in the first stage.
    • \(\boldsymbol{r}_i = \boldsymbol{\rho} + \boldsymbol{u}_i + \boldsymbol{e}_i\),
      • where \(\boldsymbol{r}_i\) is a vector of the sample correlation coefficients in the \(i\)th study,
      • \(\boldsymbol{\rho}\) is a vector of the average population correlation coefficients,
      • \(\mathrm{Var}(\boldsymbol{u}_i)=T^2\) is the variance-covariance matrix of the random effects,
      • and \(\mathrm{Var}(\boldsymbol{e}_i)=V_i\) is the known sampling covariance matrix of \(\boldsymbol{r}_i\).
  • Stage 2:
    • After estimating the average population correlation vector \(\hat{\rho}\) and its sampling covariance matrix, researchers may manually calculate the path coefficients and their standard errors.
  • The GLS approach has several limitations:
    • It only fits saturated path models without any latent factors.
    • It is difficult to implement because some matrix calculations are involved.
    • Cheung and Chan (2005) solved these issues by showing how some SEM software can be used to apply the GLS approach.
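As a concrete illustration of the Stage 2 calculation, standardized path coefficients can be read off the pooled correlations by regression algebra, \(\hat{\beta} = R_{xx}^{-1}r_{xy}\). The sketch below is in Python with illustrative pooled values (it omits the standard errors, which require the sampling covariance matrix of \(\hat{\rho}\)):

```python
# Path coefficients for x3 regressed on x1 and x2 from a pooled correlation
# matrix: beta = Rxx^{-1} rxy (standardized regression weights).
# Pooled correlations (illustrative values): r21 = 0.6, r31 = 0.5, r32 = 0.7.
r21, r31, r32 = 0.6, 0.5, 0.7

# Rxx = [[1, r21], [r21, 1]], rxy = [r31, r32]; solve the 2x2 system
# by Cramer's rule.
det = 1 - r21 ** 2
b1 = (r31 - r21 * r32) / det   # effect of x1 on x3: 0.125
b2 = (r32 - r21 * r31) / det   # effect of x2 on x3: 0.625
print(b1, b2)
```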

TSSEM approach

  • The TSSEM approach is very similar to the GLS approach but is based on SEM.
  • Fixed-effects TSSEM (Cheung & Chan, 2005)21
    • Researchers are only interested in the studies included in the meta-analysis;
    • A common correlation/covariance matrix is assumed.
  • Random-effects TSSEM (Cheung, 2014)22
    • Studies are samples of a larger population;
    • Researchers may want to generalize the findings beyond the studies included;
    • Studies may have their own correlation/covariance matrices.

Fixed-effects TSSEM

  • The distribution theory of SEM is based on covariance matrices, whereas correlation matrices are usually used in MASEM.
  • We create additional parameters (D) so that SEM can correctly analyze correlation matrices.
    • \(\Sigma (\theta) = DP(\theta)D\),
    • where \(\Sigma(\theta)\) is the structural model on the covariance matrix, \(D\) is a diagonal matrix, and \(P(\theta)\) is the structural model on the correlation matrix with the constraint that \(\mathrm{Diag}(P(\theta)) = 1\), where \(1\) is a vector of ones (Joreskog & Sorbom, 1996).23
  • For example, \(\Sigma (\theta) =\begin{bmatrix} 4.0 \\ 1.8 & 9.0 \\ 3.2 & 6.0 & 16.0\end{bmatrix}=D*P*D=\begin{bmatrix} 2 \\ 0 & 3 \\ 0 & 0 & 4\end{bmatrix} \begin{bmatrix} 1 \\ 0.3 & 1 \\ 0.4 & 0.5 & 1\end{bmatrix}\begin{bmatrix} 2 \\ 0 & 3 \\ 0 & 0 & 4\end{bmatrix}\).
  • \(D\) is required to ensure that the distribution theory for covariance matrices applies to correlation matrices.
  • Note: we use the multiple-group SEM approach, which does not require estimating the sampling covariance matrix of the correlation coefficients. Thus, it performs better than the GLS approach under the fixed-effects model.
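The decomposition in the example can be verified numerically. A minimal Python sketch with a hand-rolled matrix product (no external libraries assumed):

```python
# Verify the decomposition Sigma = D * P * D with the matrices from the text.
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D = [[2, 0, 0], [0, 3, 0], [0, 0, 4]]                     # standard deviations
P = [[1.0, 0.3, 0.4], [0.3, 1.0, 0.5], [0.4, 0.5, 1.0]]   # correlations

Sigma = matmul(matmul(D, P), D)
print(Sigma)  # approximately [[4.0, 1.8, 3.2], [1.8, 9.0, 6.0], [3.2, 6.0, 16.0]]
```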

Fixed-effects TSSEM: Stage 1 analysis (1)

  • The fixed-effects TSSEM is essentially a multiple-group SEM:
    • Common correlation matrix: \(P_\mathrm{F}=P_1=P_2=\ldots=P_k\), where \(D_i\) may vary across studies.
    • Test of homogeneity of correlation matrices: \(H_0: P_1=P_2=\ldots=P_k\) compared to a model that \(P_i\) may differ across studies.
    • Incomplete or missing correlations are efficiently handled by maximum likelihood estimation (Muthen, Kaplan, & Hollis, 1987).24
    • The assumption of the homogeneity of correlation matrices may also be tested by the goodness-of-fit indices, such as the RMSEA and SRMR.

Fixed-effects TSSEM: Stage 1 analysis (2)

  • Most MASEM applications are based on correlation matrices because the primary studies’ measurements and scales may differ.
  • We may analyze covariance matrices if all studies use the same measurement and scale. The advantage is that we may test the measurement properties (measurement invariance) of the data.
  • Analysis of covariance matrices (Cheung & Chan, 2009):25
    • Common covariance matrix: \(\Sigma_\mathrm{F}=\Sigma_1=\Sigma_2=\ldots=\Sigma_k\)
    • Test of homogeneity of covariance matrices: \(H_0: \Sigma_1=\Sigma_2=\ldots=\Sigma_k\) compared to a model that \(\Sigma_i\) may vary across studies.

Fixed-effects TSSEM: Stage 2 analysis (1)

  • After the first-stage analysis, the estimated common correlation matrix \(R_\mathrm{F}\) and its asymptotic sampling covariance matrix \(V_\mathrm{F}\) are available. \(V_\mathrm{F}\) is critical in the analysis because it indicates the precision of \(R_\mathrm{F}\).
  • When handling square matrices, it is easier to convert them into vectors, e.g.,
    • \(X = \begin{bmatrix}1 & 2 & 3 & 4\\ 2 & 5 & 6 & 7\\ 3 & 6 & 8 & 9\\ 4 & 7 & 9 & 10 \end{bmatrix}\),
    • \(\mathrm{vech}(X)=\begin{bmatrix}1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\end{bmatrix}^\mathrm{T}\) for a covariance matrix, and
    • \(\mathrm{vechs}(X)=\begin{bmatrix}2 & 3 & 4 & 6 & 7 & 9\end{bmatrix}^\mathrm{T}\) for a correlation matrix.
  • For example,
  • A sample covariance matrix, \(S = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.5& & & \\ x2& 0.6& 2.1& & \\ x3& 0.5& 0.7& 1.7 &\\ \end{array} } \right]\), \(s=\mathrm{vech}(S)=\begin{bmatrix}1.5 \\ 0.6 \\ 0.5 \\ 2.1 \\ 0.7 \\ 1.7\end{bmatrix}\).
  • A sample correlation matrix \(R = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& 0.6& 1.0& & \\ x3& 0.5& 0.7& 1.0 &\\ \end{array} } \right]\), \(r=\mathrm{vechs}(R)=\begin{bmatrix}0.6 \\ 0.5 \\ 0.7\end{bmatrix}\).
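The two operators are straightforward to write out; the Python sketch below stacks the lower-triangular elements column by column (in R, the OpenMx package underlying metaSEM provides `vech()` and `vechs()`):

```python
# vech(): stack the lower-triangular elements (including the diagonal)
# column by column; vechs(): the same but excluding the diagonal.
def vech(X):
    n = len(X)
    return [X[i][j] for j in range(n) for i in range(j, n)]

def vechs(X):
    n = len(X)
    return [X[i][j] for j in range(n) for i in range(j + 1, n)]

S = [[1.5, 0.6, 0.5],
     [0.6, 2.1, 0.7],
     [0.5, 0.7, 1.7]]

print(vech(S))   # [1.5, 0.6, 0.5, 2.1, 0.7, 1.7]
print(vechs(S))  # [0.6, 0.5, 0.7]
```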

Fixed-effects TSSEM: Stage 2 analysis (2)

  • Suppose \(R_\mathrm{F} = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1.0& & & \\ x2& 0.6& 1.0& & \\ x3& 0.5& 0.7& 1.0 &\\ \end{array} } \right]\), \(r=\begin{bmatrix}0.6 \\ 0.5 \\ 0.7\end{bmatrix}\).
  • Let us consider two scenarios:
    • Small (red curve) \(V_\mathrm{F} = \left[ {\begin{array}{c|ccc} & r21 & r31 & r32\\ \hline r21& 0.1& & & \\ r31& 0.02& 0.1& & \\ r32& 0.02& 0.02& 0.1&\\ \end{array} } \right]\).
    • Large (blue curve) \(V_\mathrm{F} = \left[ {\begin{array}{c|ccc} & r21 & r31 & r32\\ \hline r21& 0.2& & & \\ r31& 0.04& 0.2& & \\ r32& 0.04& 0.04& 0.2&\\ \end{array} } \right]\).
  • When we fit the SEM, we expect the standard errors to be larger in the second scenario because the inputs are less precise.

Fixed-effects TSSEM: Stage 2 analysis (3)

  • Suppose the proposed correlation structure is \(\rho_\mathrm{F}(\theta)=\mathrm{vechs}(P_\mathrm{F}(\theta))\), the discrepancy function based on the weighted least squares (WLS) is (Bentler & Savalei, 2010):26
    • \(F_\mathrm{WLS}(\theta) = (r_\mathrm{F} - \rho_\mathrm{F}(\theta))^\mathrm{T} V_\mathrm{F}^{-1} (r_\mathrm{F} - \rho_\mathrm{F}(\theta))\)
  • The logic of the WLS estimation method is to weight the correlation elements by the inverse of their sampling covariance matrix, that is, to give heavier weight to more precise estimates.
  • WLS ensures that the parameter estimates, standard errors, and fit statistics are correct. In contrast, the univariate approach does not work well because it does not take the precision of the estimated correlation matrix into account.
  • Previous simulation studies in SEM show that WLS requires a large sample size to work well. Fortunately, this is not a concern in MASEM, which usually has a large total sample size.
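The discrepancy function is just a weighted quadratic form and can be evaluated directly. The Python sketch below uses the two \(V_\mathrm{F}\) scenarios from the previous slide and an arbitrary, illustrative \(\rho(\theta)\); because the large \(V_\mathrm{F}\) is exactly twice the small one, the discrepancy is halved for the same residual:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination (A small and well-conditioned)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def f_wls(r, rho, V):
    """F_WLS = (r - rho)' V^{-1} (r - rho)."""
    d = [ri - pi for ri, pi in zip(r, rho)]
    return sum(di * xi for di, xi in zip(d, solve(V, d)))

r = [0.6, 0.5, 0.7]        # pooled correlations from Stage 1
rho = [0.55, 0.55, 0.65]   # implied correlations for some theta (illustrative)
V_small = [[0.10, 0.02, 0.02], [0.02, 0.10, 0.02], [0.02, 0.02, 0.10]]
V_large = [[0.20, 0.04, 0.04], [0.04, 0.20, 0.04], [0.04, 0.04, 0.20]]

print(f_wls(r, rho, V_small), f_wls(r, rho, V_large))
```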

Fixed-effects TSSEM: Stage 2 analysis (4)

  • An example:
    • The sample sizes for the correlations are 200, 500, and 1,000.
    • The asymptotic variance of \(\bar{r}_{21}\) is larger than those of \(\bar{r}_{31}\) and \(\bar{r}_{32}\) because \(\bar{r}_{21}\) is based on a smaller sample size (and thus has a larger SE).
    • Because \(V_\mathrm{F}\) is inverted, less weight is given to \(\bar{r}_{21}\) than to \(\bar{r}_{32}\). The TSSEM approach therefore takes the precision of the Stage 1 estimates into account through these weights.

Fixed-effects TSSEM: Stage 2 analysis (5)

  • SEM models are usually based on covariance matrices, whereas MASEM models are based on correlation matrices.
  • Let us consider a simple one-factor CFA model with three indicators.
  • Model-implied covariance matrix: \(\Sigma(\theta) = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& a^2 + e11& & & \\ x2& ab & b^2 + e22 & & \\ x3& ac & bc & c^2 + e33 &\\ \end{array} } \right]\).
  • Model-implied correlation matrix: \(P(\theta) = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1& & & \\ x2& ab & 1& & \\ x3& ac & bc & 1&\\ \end{array} } \right]\).

  • It is important to note that the diagonals of the correlation matrix are always 1, regardless of the models.
  • There are two approaches to analyzing correlation matrices.
  • Approach 1:
    • \(P(\theta) = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1 = a^2 + e11& & & \\ x2& ab & 1 = b^2 + e22 & & \\ x3& ac & bc & 1 = c^2 + e33 &\\ \end{array} } \right]\).
    • Impose nonlinear constraints to ensure the diagonals are 1. In the above example, it means,
      • \(1 = a^2 + e11\),
      • \(1 = b^2 + e22\), and
      • \(1 = c^2 + e33\).
    • Advantage: we can get all parameter estimates including \(e11\), \(e22\), and \(e33\).
    • Disadvantage: slightly more technical issues, e.g., non-convergence.
  • Approach 2:
    • \(P(\theta) = \left[ {\begin{array}{c|ccc} & x1 & x2 & x3\\ \hline x1& 1& & & \\ x2& ab & 1& & \\ x3& ac & bc & 1&\\ \end{array} } \right]\).
    • Only use the off-diagonals in the analysis. Thus, there is no estimate on \(e11\), \(e22\), and \(e33\). However, we can still calculate them using the constraints.
    • Advantage: fewer technical issues.
    • Disadvantage: The error variances are not estimable.
  • Both approaches are implemented in the metaSEM package. We will illustrate them later.
## Load the libraries
library(metaSEM)
library(symSEM)

## Specify the model
cfa <- "f =~ a*x1 + b*x2 + c*x3
        f ~~ 1*f            ## Fix the factor variance at 1 for identification
        x1 ~~ e11*x1        ## Label the error variances
        x2 ~~ e22*x2
        x3 ~~ e33*x3"

## Plot the model
plot(cfa, color="yellow")

## Convert it to RAM specification
## We will introduce the RAM specification later.
RAM <- lavaan2RAM(cfa, obs.variables=c("x1", "x2", "x3"))

## Print the model-implied covariance matrix
impliedS(RAM, corr=FALSE)
## $Sigma
##    x1        x2        x3       
## x1 "e11+a^2" "a*b"     "a*c"    
## x2 "b*a"     "e22+b^2" "b*c"    
## x3 "c*a"     "c*b"     "e33+c^2"
## 
## $mu
##   x1 x2 x3
## 1  0  0  0
## 
## $corr
## [1] FALSE
## Print the model-implied correlation matrix
impliedS(RAM, corr=TRUE)
## $Sigma
##    x1    x2    x3   
## x1 "1"   "a*b" "a*c"
## x2 "b*a" "1"   "b*c"
## x3 "c*a" "c*b" "1"  
## 
## $mu
##   x1 x2 x3
## 1  0  0  0
## 
## $corr
## [1] TRUE
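Under Approach 2, the error variances are not estimated, but they can be recovered afterwards from the diagonal constraints. A small Python sketch with illustrative loading estimates (not values from any real analysis):

```python
# Approach 2 does not estimate e11, e22, e33; recover them from the
# constraint diag(P(theta)) = 1, i.e., e_ii = 1 - lambda_i^2 for a
# one-factor model with the factor variance fixed at 1.
loadings = {"a": 0.7, "b": 0.8, "c": 0.6}   # illustrative estimates

errors = {f"e{i + 1}{i + 1}": 1 - lam ** 2
          for i, lam in enumerate(loadings.values())}
print(errors)  # e11 = 1 - 0.7^2 = 0.51, and so on (up to rounding)
```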

  1. Cheung, M. W.-L. (2021). Meta-analytic structural equation modeling. In Oxford Research Encyclopedia of Business and Management. Oxford University Press. https://doi.org/10.1093/acrefore/9780190224851.013.225↩︎

  2. National Research Council (1992). Combining information: Statistical issues and opportunities for research. Washington, D.C.: National Academy Press.↩︎

  3. Brown, S. P., & Stayman, D. M. (1992). Antecedents and consequences of attitude toward the ad: A meta-analysis. Journal of Consumer Research, 19, 34-51.↩︎

  4. Premack, S. L., & Hunter, J. E. (1988). Individual unionization decisions. Psychological Bulletin, 103, 223-234.↩︎

  5. Norton, S., Cosco, T., Doyle, F., Done, J., & Sacker, A. (2013). The Hospital Anxiety and Depression Scale: A meta confirmatory factor analysis. Journal of Psychosomatic Research, 74(1), 74-81.↩︎

  6. Murayama, K., & Elliot, A. J. (2012). The competition-performance relation: A meta-analytic review and test of the opposing processes model of competition and performance. Psychological Bulletin, 138(6), 1035-1070. http://doi.org/10.1037/a0028324↩︎

  7. Hagger, M. S., Cheung, M. W.-L., Ajzen, I., & Hamilton, K. (2022). Perceived behavioral control moderating effects in the theory of planned behavior: A meta-analysis. Health Psychology, 41(2), 155–167.↩︎

  8. Epskamp, S., Borsboom, D., & Fried, E. I. (2018). Estimating psychological networks and their accuracy: A tutorial paper. Behavior Research Methods, 50(1), 195–212.↩︎

  9. Epskamp, S., Isvoranu, A.-M., & Cheung, M. W.-L. (2022). Meta-analytic Gaussian network aggregation. Psychometrika, 87(1), 12–46.↩︎

  10. Isvoranu, A.-M., Epskamp, S., & Cheung, M. W.-L. (2021). Network models of posttraumatic stress disorder: A meta-analysis. Journal of Abnormal Psychology, 130(8), 841–861.↩︎

  11. Viswesvaran, C., & Ones, D. S. (1995). Theory testing: Combining psychometric meta-analysis and structural equations modeling. Personnel Psychology, 48(4), 865-885.↩︎

  12. Becker, B. J. (1992). Using results from replicated studies to estimate linear models. Journal of Educational Statistics, 17(4), 341-362.↩︎

  13. Cheung, M. W.-L. (2014). Fixed- and random-effects meta-analytic structural equation modeling: Examples and analyses in R. Behavior Research Methods, 46(1), 29-40. http://doi.org/10.3758/s13428-013-0361-y↩︎

  14. Cheung, M. W.-L., & Chan, W. (2005). Meta-analytic structural equation modeling: A two-stage approach. Psychological Methods, 10(1), 40-64. http://doi.org/10.1037/1082-989X.10.1.40↩︎

  15. Jak, S., & Cheung, M. W.-L. (2020). Meta-analytic structural equation modeling with moderating effects on SEM parameters. Psychological Methods, 25(4), 430–455. https://doi.org/10.1037/met0000245↩︎

  16. Ke, Z., Zhang, Q., & Tong, X. (2019). Bayesian meta-analytic SEM: A one-stage approach to modeling between-studies heterogeneity in structural parameters. Structural Equation Modeling: A Multidisciplinary Journal, 26(3), 348–370. https://doi.org/10.1080/10705511.2018.1530059↩︎

  17. Cudeck, R. (1989). Analysis of correlation matrices using covariance structure models. Psychological Bulletin, 105(2), 317-327.↩︎

  18. Cudeck, R. (1989). Analysis of correlation matrices using covariance structure models. Psychological Bulletin, 105(2), 317–327. https://doi.org/10.1037/0033-2909.105.2.317↩︎

  19. Jak, S., & Cheung, M. W.-L. (2020). Meta-analytic structural equation modeling with moderating effects on SEM parameters. Psychological Methods, 25(4), 430–455. https://doi.org/10.1037/met0000245↩︎

  20. Jak, S., & Cheung, M. W.-L. (2022, March 16). Can findings from meta-analytic structural equation modeling in management and organizational psychology be trusted? https://psyarxiv.com/b3qvn↩︎

  21. Cheung, M. W.-L., & Chan, W. (2005). Meta-analytic structural equation modeling: A two-stage approach. Psychological Methods, 10(1), 40–64. https://doi.org/10.1037/1082-989X.10.1.40↩︎

  22. Cheung, M. W.-L. (2014). Fixed- and random-effects meta-analytic structural equation modeling: Examples and analyses in R. Behavior Research Methods, 46(1), 29–40. https://doi.org/10.3758/s13428-013-0361-y↩︎

  23. Joreskog, K. G., & Sorbom, D. (1996). LISREL 8: A user's reference guide. Chicago, IL: Scientific Software International, Inc.↩︎

  24. Muthen, B., Kaplan, D., & Hollis, M. (1987). On structural equation modeling with data that are not missing completely at random. Psychometrika, 52(3), 431-462. http://doi.org/10.1007/BF02294365↩︎

  25. Cheung, M. W.-L., & Chan, W. (2009). A two-stage approach to synthesizing covariance matrices in meta-analytic structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 16(1), 28-53. http://doi.org/10.1080/10705510802561295↩︎

  26. Bentler, P. M., & Savalei, V. (2010). Analysis of correlation structures: Current status and open problems. In S. Kolenikov, D. Steinley, & L. Thombs (Eds.), Statistics in the Social Sciences (pp. 1-36). New Jersey: John Wiley & Sons, Inc.↩︎